Based on my reading of peer-reviewed papers on usability and a discussion with Megan last night, I'm going to try to summarize a go-forward plan here.

In most of the studies I read, a two-pronged approach to usability evaluation was a consistent theme, and Megan saw the same thing:

1. Inspection methods:
i.e., heuristic evaluation, cognitive walkthrough, pluralistic usability walkthrough, and criteria-based comparison between websites. There are many types of inspection method.
Heuristic evaluation is a usability engineering method in which a small set of expert evaluators examine a user interface for design problems by judging its compliance with a set of recognized usability principles or "heuristics". Using more than 5 experts does not produce appreciably better results.

We can take this a step further by making this comparison against selected peer or benchmark websites, using a scoring-style evaluation based on Intuitionistic Fuzzy Sets (see the article "Assessing User Trust to Improve Web Usability" on my citation list, Bedi/Banati). As a numbers freak, I think this is a neat way to quantitatively assess the difference between websites: weights are assigned to the criteria, each site is scored against the weighted criteria, and the sites are then compared against each other. I think this could provide a more definitive differential between sites than simple Yes/No checklists (a rough sketch follows at the end of this section). If we want to stay simple, the other option is the Yes/No method used to score compliance against criteria.

Many of the articles provide criteria lists to assess against.
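
To make the scoring idea concrete, here is a minimal sketch of how a weighted, IFS-style comparison might be tallied. This is my own illustration, not the exact formulation from the Bedi/Banati article: I'm assuming each criterion gets a membership/non-membership pair (mu, nu) with mu + nu <= 1, and that sites are ranked by a simple weighted (mu - nu) score. The criteria names, weights, and numbers are made-up placeholders, not real evaluation data.

    # Hypothetical IFS-style weighted scoring of two websites (Python).
    # Assumption: each criterion is rated as an intuitionistic fuzzy value
    # (mu = degree the site satisfies the criterion, nu = degree it does not,
    # with mu + nu <= 1); overall score = sum of weight * (mu - nu).
    # All names and numbers below are placeholders for illustration only.

    CRITERIA_WEIGHTS = {       # weights should sum to 1.0
        "navigation": 0.40,
        "search":     0.35,
        "aesthetics": 0.25,
    }

    # ratings[site][criterion] = (mu, nu)
    ratings = {
        "our_library": {
            "navigation": (0.6, 0.3),
            "search":     (0.5, 0.4),
            "aesthetics": (0.7, 0.2),
        },
        "benchmark_site": {
            "navigation": (0.8, 0.1),
            "search":     (0.7, 0.2),
            "aesthetics": (0.6, 0.3),
        },
    }

    def weighted_ifs_score(site_ratings, weights):
        """Higher score = better overall compliance with the weighted criteria."""
        score = 0.0
        for criterion, weight in weights.items():
            mu, nu = site_ratings[criterion]
            score += weight * (mu - nu)
        return score

    for site, site_ratings in ratings.items():
        print(f"{site}: {weighted_ifs_score(site_ratings, CRITERIA_WEIGHTS):.3f}")
    # -> our_library: 0.280, benchmark_site: 0.530

If we fall back to the simple Yes/No checklist, it is effectively the special case where every rating is (1, 0) or (0, 1), so nothing is lost if we change our minds later.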

2. User test methods:
Typically carried out by having a group of users (5-10 is probably the best number) perform a list of tasks associated with typical use of the website. How they perform and how they feel can be analysed via think-aloud/observation or post-event interview (individual or focus group) methods.

Surveys - I found a great article that actually downplayed the applicability of surveys in our situation, suggesting that this method is better suited "towards specific requirement specification" than to user needs analysis.

Again, a few of the articles provided example task lists for user tests.

An awesome possibility here: Megan may have access to the prototype of her library's updated website. A user test on the prototype versus the original could be very telling. She is checking whether this is an option within the timing of the project.


So, here are the next steps for the team (and my suggested discussion agenda for tomorrow evening):

1. Agree that we will proceed with the project as described in Q#17 and approved by Ms. C - is everyone comfortable with that?

2. Agree on the types of methods we are going to target. For the purposes of Part 1, my interpretation is that we need to cover the methods available, the ones we choose, the data we want to collect, and the tools that may be required to do so. We do not need to work out the details of the assessment/evaluation until Part 2 of the assignment. I think it will be much easier to report on the literature we have found once we have agreed on this direction. I would also suggest that we break into two "groups" to cover the different studies.

3. Which websites do we include in the benchmarking group, and how do we best define that group? Are there any articles that suggest this, or do we select our own? While demographic/population/budget similarity seems like an obvious choice, something is nagging me about also comparing against sites that have been formally recognized. Are there any lists of award-winning or exemplary sites that include libraries?

BREAKING NEWS - Jamie is going to see if she can cross-match the public libraries in the website hall of fame on libsuccess.org against US libraries of comparable size/budget/demographics to Meade County Public Library.

4. We have oodles of material now. Let's decide on 12 articles that fit within the scope of our study and each take 4 to write up, as already discussed. The other material we have collected will still have a place within the project and can be cited as it is used.

5. I think we need to focus on Q2 and Q4 - can we try to flesh these out before or during the meeting?

Determine the scope of the analysis. Determine what will or will not be evaluated. Quoting from page 9 of your text:
a. Are the current patterns of use cause for concern? Has demand for a service dropped off? Has demand suddenly peaked?
b. Will costs have to be determined? If so, does the budget provide sufficient details to identify all of the cost components for providing the service?
c. Should customers of the library be involved? If so, what will be the manner of their participation?
d. What evaluation methodology and design will be used? How will the data be collected? If a survey is going to be used, is it possible to use one that has been employed by other libraries?
e. What is the purpose for doing the evaluation? Is the library attempting to improve its operational efficiencies (an internal focus), or is the study being done to better understand the effectiveness of a library service (an outward focus)?

Determine the kind of analysis to do
a. Read page 12
b. What methods (models and/or tools) might be most helpful? Discuss several, explaining why they might or might not help your project. Select no more than two for use and explain why you prefer them over others. You are actually going to do this study, so choose methods that you have the capability of actually doing.
c. Are you looking at qualitative or quantitative data?
d. What tools might you use? Why?